AI’s regulation naysayers protest too much

LONDON, June 21 (Reuters Breakingviews) - The kingpins of artificial intelligence (AI) are heading for a supervisory showdown. Lawmakers this month voted on the European Union’s AI Act in one of the first significant attempts to regulate the nascent sector. Other countries are also scrambling to design rules. Though industry leaders like OpenAI boss Sam Altman warn some approaches are too onerous, the risks of pandering to special pleading exceed the dangers of stifling a new technology.

AI is developing so rapidly that there’s not even a consensus on whether it needs specific regulations. As venture capitalist Marc Andreessen notes, AI models are made up of code and algorithms just like other computer programmes. What distinguishes them, however, is that they don’t mechanically follow instructions set by humans, but draw conclusions independently. That’s how AI can empower image-generating tools to create pictures of people who look real. Moreover, AI models’ ability to extrapolate from data at high speed explains why almost every industry is deploying the technology in the hope of boosting productivity.

These advantages are also the reasons AI might create big problems. Training models based on specific data sets can replicate historical human biases, skewing mortgage approvals or job applications. AI’s ability to learn from freely available data also poses big questions about violations of copyright law. And the potential for technology to rapidly displace large numbers of jobs understandably makes politicians nervous.

Constructing guardrails is not straightforward. Precisely because AI models can replicate tasks done by humans, it’s harder to tell machine output from human work. The models’ ability to make apparently independent decisions also blurs lines of responsibility, making it harder for the designer, or an official body, to monitor them.

Despite these challenges, governments have agreed to some general principles. A 2019 agreement brokered by the Organisation for Economic Co-operation and Development specified that AI should be transparent, robust, accountable and secure. Beyond these generalities, however, there’s widespread disagreement over what counts as AI, what regulators should try to solve for, and to what extent they need to enforce their objectives.

Policymakers have divided into different camps. At one end of the spectrum are the EU, China and Canada, which are trying to construct a new regulatory architecture. At the other end are India and the United Kingdom; a UK white paper in April effectively said AI doesn’t require any special regulation beyond a set of principles similar to those articulated by the OECD. Somewhere in between is the United States, where President Joe Biden has proposed an AI Bill of Rights while Congress is still debating the need for targeted rules. Such divergence suggests the world is unlikely to see a global AI regulator, an idea Altman suggested to Congress on May 16.

The EU’s proposed law sorts AI applications into four risk buckets. A minority of “unacceptable risk” uses, such as real-time facial recognition for surveillance of citizens, will be banned. The majority will be deemed limited- or low-risk and subject to minimal oversight. Systems that could be used to influence voters and the outcome of elections, and those used by social media platforms with more than 45 million users, are labelled “high-risk”. Altman’s complaint is that general-purpose AI systems, including OpenAI’s ChatGPT tool, underpin many of these applications, which would make their providers accountable for those risks.

He has a point. The EU law will require “high-risk” applications to reveal content generated by the technology, publish summaries of the copyrighted data used to do so, and punish companies that make inadequate disclosures with fines worth up to 7% of their total revenue. That seems excessive for applications like ChatGPT, which are mainly used for summarising documents and helping to write code. The exacting standards could also discourage smaller companies and non-profit organisations from developing general-purpose AI systems, limiting innovation and leaving the field to industry behemoths like Microsoft-backed (MSFT.O) OpenAI or Google owner Alphabet (GOOGL.O). That in turn could stifle positive uses of AI in areas such as drug development or semiconductor design, or ensure that countries with a lighter regulatory touch reap the benefits of innovation.

Yet behind the new-fangled and possibly game-changing technology is a familiar arm-wrestling contest between regulators and big technology firms. Watchdogs in Brussels and the United States are engaged in a belated attempt to limit the power of giants like Alphabet, Microsoft and Facebook owner Meta Platforms (META.O). And even the EU’s proposed regulation effectively lets AI practitioners mark their own homework. Given the speed of innovation and the risk of ChatGPT and other generative AI models spawning more problematic applications, the threat of too much regulation is less daunting than the alternative.

Follow @karenkkwok on Twitter

CONTEXT NEWS

European Union lawmakers on June 14 agreed changes to draft artificial intelligence (AI) rules to include a ban on the use of the technology in biometric surveillance, and to mandate that generative AI systems like the ChatGPT chatbot must disclose AI-generated content.

Among other changes, European Union lawmakers want any company using generative tools to disclose copyrighted material used to train its systems, and companies working on “high-risk” applications to carry out a fundamental rights impact assessment and evaluate environmental impact. Systems like ChatGPT would have to disclose that content was AI-generated, help distinguish so-called deep-fake images from real ones, and ensure safeguards against illegal content.

OpenAI CEO Sam Altman said during a speech in Paris that the creator of ChatGPT is in discussions with European regulators and wants “to make sure it is able to comply”, Bloomberg reported on May 26.

On the same day Altman said that the company has no plans to leave Europe, reversing a threat made earlier that week to leave the region if it becomes too hard to comply with upcoming laws on artificial intelligence. “We are excited to continue to operate here and of course have no plans to leave,” he said on Twitter.

Editing by Peter Thal Larsen, George Hay and Katrina Hamlin

Our Standards: The Thomson Reuters Trust Principles.

Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias.

